Visual servoing
Visual servoing, also known as vision-based robot control and abbreviated VS, is a technique which uses feedback information extracted from a vision sensor (visual feedback〔http://www.k2.t.u-tokyo.ac.jp/tech_terms/index-e.html#VisualFeedback〕) to control the motion of a robot. One of the earliest papers on visual servoing came from SRI International in 1979.〔Agin, G.J., "Real Time Control of a Robot with a Mobile Camera". Technical Note 179, SRI International, Feb. 1979.〕
==Visual servoing taxonomy==

''There are two fundamental configurations of the robot end-effector (hand) and the camera:''
* Eye-in-hand, or end-point closed-loop control, where the camera is attached to the moving hand and observes the relative position of the target.
* Hand-to-eye, or end-point open-loop control, where the camera is fixed in the world and observes the target and the motion of the hand.
Visual servoing control techniques are broadly classified into the following types:〔S. A. Hutchinson, G. D. Hager, and P. I. Corke. A tutorial on visual servo control. IEEE Trans. Robot. Automat., 12(5):651–670, Oct. 1996.〕〔F. Chaumette, S. Hutchinson. Visual Servo Control, Part I: Basic Approaches. IEEE Robotics and Automation Magazine, 13(4):82–90, December 2006.〕
*Image-based (IBVS)
*Position/pose-based (PBVS)
*Hybrid approach
IBVS was proposed by Weiss and Sanderson.〔A. C. Sanderson and L. E. Weiss. Adaptive visual servo control of robots. In A. Pugh, editor, Robot Vision, pages 107–116. IFS, 1983.〕 The control law is based on the error between the current and desired features on the image plane, and does not involve any estimate of the pose of the target. The features may be the coordinates of visual features, lines, or moments of regions. IBVS has difficulties〔F. Chaumette. Potential problems of stability and convergence in image-based and position-based visual servoing. In D. Kriegman, G. Hager, and S. Morse, editors, The Confluence of Vision and Control, volume 237 of Lecture Notes in Control and Information Sciences, pages 66–78. Springer-Verlag, 1998.〕 with motions involving very large rotations, a problem that has come to be called camera retreat.
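As a rough illustration of the image-based law just described, here is a minimal sketch in Python/NumPy (not code from the cited papers; the point-feature interaction matrix follows the standard tutorial formulation, and the gain value is an arbitrary assumption):
<syntaxhighlight lang="python">
import numpy as np

def interaction_matrix_point(x, y, Z):
    """Interaction matrix of one normalized image point (x, y) at depth Z,
    as given in the standard visual servoing tutorials."""
    return np.array([
        [-1.0 / Z, 0.0,       x / Z, x * y,       -(1.0 + x * x),  y],
        [0.0,      -1.0 / Z,  y / Z, 1.0 + y * y, -x * y,         -x],
    ])

def ibvs_velocity(s, s_star, L, gain=0.5):
    """Classic IBVS law: camera velocity that drives the image-feature
    error e = s - s* to zero, i.e. v = -gain * pinv(L) @ e."""
    error = np.asarray(s) - np.asarray(s_star)
    return -gain * np.linalg.pinv(L) @ error
</syntaxhighlight>
Stacking the rows of interaction_matrix_point for several tracked points yields the matrix L passed to ibvs_velocity; note that no pose of the target is ever estimated.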
PBVS is a model-based technique (with a single camera): the pose of the object of interest is estimated with respect to the camera, and a command is then issued to the robot controller, which in turn controls the robot. In this case the image features are extracted as well, but they are additionally used to estimate 3D information (the pose of the object in Cartesian space), hence the servoing is done in 3D.
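A correspondingly simplified position-based sketch, servoing proportionally on the estimated 3D pose error (the frame conventions and the use of SciPy's rotation utilities are illustrative assumptions, not taken from the cited works):
<syntaxhighlight lang="python">
import numpy as np
from scipy.spatial.transform import Rotation

def pbvs_velocity(T_cam_obj, T_cam_obj_desired, gain=0.5):
    """Simplified PBVS law: proportional control on the pose error between
    the estimated and desired object pose in the camera frame.

    T_cam_obj, T_cam_obj_desired : 4x4 homogeneous transforms.
    Returns (v, w), the commanded translational and rotational velocities.
    """
    # Relative transform from the desired pose to the current estimate
    T_err = np.linalg.inv(T_cam_obj_desired) @ T_cam_obj
    t_err = T_err[:3, 3]
    # Axis-angle (theta * u) representation of the rotation error
    w_err = Rotation.from_matrix(T_err[:3, :3]).as_rotvec()
    return -gain * t_err, -gain * w_err
</syntaxhighlight>
Here the image measurements enter only through the pose estimate, which is why PBVS relies on a model of the object and a calibrated camera.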
Hybrid approaches use some combination of 2D and 3D servoing. There have been a few different approaches to hybrid servoing (see the sketch after this list):
* 2-1/2-D Servoing〔E. Malis, F. Chaumette and S. Boudet, 2.5 D visual servoing, IEEE Transactions on Robotics and Automation, 15(2):238-250, 1999〕
* Motion partition-based
* Partitioned DOF Based
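For the hybrid family, a very rough sketch in the spirit of 2-1/2-D servoing (heavily simplified: rotation is servoed in 3D from an estimated rotation error while translation is servoed on an extended image-point error; the full published law also involves the corresponding interaction matrices, omitted here):
<syntaxhighlight lang="python">
import numpy as np
from scipy.spatial.transform import Rotation

def hybrid_25d_velocity(x, y, Z, x_star, y_star, Z_star, R_err, gain=0.5):
    """Heavily simplified 2.5-D style split (illustrative only).

    (x, y, Z)                : current normalized image point and its depth.
    (x_star, y_star, Z_star) : desired values of the same quantities.
    R_err                    : estimated rotation error matrix (e.g. obtained
                               from a homography decomposition).
    """
    # Rotation: servoed directly in 3D via the axis-angle error.
    w = -gain * Rotation.from_matrix(R_err).as_rotvec()
    # Translation: servoed on the extended image error (x, y, log(Z/Z*)).
    e_t = np.array([x - x_star, y - y_star, np.log(Z / Z_star)])
    v = -gain * e_t  # the full 2.5-D law additionally maps this error
                     # through an interaction matrix and decouples rotation
    return v, w
</syntaxhighlight>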
